Visually guided object grasping
Authors
Abstract
In this paper we describe a method for aligning a robot gripper (or any other end effector) with an object. An example of such a gripper/object alignment is grasping. The task consists of first computing an alignment condition, and second servoing the robot such that it moves and reaches the desired position. A single camera provides the visual feedback necessary to estimate the location of the object to be grasped, to determine the gripper/object alignment condition, and to dynamically control the robot's motion. The original contributions of this paper are the following. Since the camera is not mounted on the robot, it is crucial to express the alignment condition such that it does not depend on the intrinsic and extrinsic camera parameters. We therefore develop a method for expressing the alignment condition (the relative location of the gripper with respect to the object) such that it is projectively invariant, i.e., it is view invariant and does not require a calibrated camera. The central issue of any image-based servoing method is the estimation of the image Jacobian, which relates the 3-D velocity field of a moving object to the image velocity field. In the past, exact estimation of this Jacobian has been avoided because of the lack of a fast and robust method for estimating the pose of a 3-D object with respect to a camera. We discuss the advantage of using an exact image Jacobian with respect to the dynamic behaviour of the servoing process, and we describe a new method for computing the pose of an object with respect to the camera that observes it. This new pose computation method avoids the non-linear structure of the perspective camera model by using a first-order approximation of perspective, namely paraperspective. After just a few iterations, the paraperspective pose converges to the perspective pose. From an experimental point of view, we show the benefit of exact versus approximate Jacobian estimation.
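For a single point feature, the image Jacobian mentioned in the abstract has a well-known closed form in image-based visual servoing (the "interaction matrix"). The sketch below is illustrative rather than the paper's exact formulation; the function name and the use of normalized image coordinates (focal length 1) are assumptions made here:

```python
import numpy as np

def interaction_matrix(x, y, Z):
    """2x6 image Jacobian for a point feature at normalized image
    coordinates (x, y) with depth Z.  It maps the camera velocity screw
    (vx, vy, vz, wx, wy, wz) to the image velocity (dx/dt, dy/dt).
    Only the translational columns depend on the depth Z, which is why
    an *exact* Jacobian requires the pose of the observed object."""
    return np.array([
        [-1.0 / Z,  0.0,      x / Z,  x * y,        -(1.0 + x * x),  y],
        [ 0.0,     -1.0 / Z,  y / Z,  1.0 + y * y,  -x * y,         -x],
    ])
```

Stacking one such 2x6 block per tracked feature gives the full Jacobian that a servoing control law inverts in a least-squares sense; approximating Z (e.g. freezing it at its value at the goal position) is what distinguishes an approximate Jacobian from the exact one discussed above.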
Finally, it is important to stress the fact that, in all the visual tasks that are described below, the camera is either not calibrated or poorly calibrated.
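The iterative linearized-pose idea can be illustrated with the closely related weak-perspective scheme of DeMenthon and Davis (POSIT); the paper itself uses a paraperspective linearization, but the structure of the iteration — solve a linear system under the approximate camera model, then refine the perspective correction terms — is the same. The function name, the use of normalized coordinates, and taking the first object point as reference are assumptions of this sketch, not the paper's notation:

```python
import numpy as np

def iterative_pose(object_pts, image_pts, n_iter=10):
    """Pose from point correspondences by iterating a scaled-orthographic
    (weak-perspective) linearization, in the spirit of POSIT.
    object_pts: (n,3) 3-D points in the object frame, first point is the reference.
    image_pts:  (n,2) normalized image coordinates (focal length 1).
    Returns R (3x3) and t (3,), the camera-frame pose of the reference point."""
    M = object_pts - object_pts[0]        # object-frame vectors from the reference
    A = M[1:]                             # (n-1, 3), must have rank 3 (non-coplanar)
    A_pinv = np.linalg.pinv(A)
    x, y = image_pts[:, 0], image_pts[:, 1]
    eps = np.zeros(len(object_pts) - 1)   # perspective correction terms, start at 0
    for _ in range(n_iter):
        # solve the linearized projection equations for I = r1/tz, J = r2/tz
        I = A_pinv @ (x[1:] * (1.0 + eps) - x[0])
        J = A_pinv @ (y[1:] * (1.0 + eps) - y[0])
        s = 0.5 * (np.linalg.norm(I) + np.linalg.norm(J))   # scale = 1 / tz
        r1, r2 = I / np.linalg.norm(I), J / np.linalg.norm(J)
        r3 = np.cross(r1, r2)
        tz = 1.0 / s
        eps = (A @ r3) / tz               # refine the corrections and iterate
    R = np.vstack([r1, r2, r3])
    t = np.array([x[0] * tz, y[0] * tz, tz])
    return R, t
```

With exact correspondences the recovered pose converges to the full perspective pose in a handful of iterations, mirroring the abstract's claim that the paraperspective pose converges to the perspective pose after just a few iterations.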
Similar references
Grasping without Sight: Insights from the Congenitally Blind
We reach for and grasp different sized objects numerous times per day. Most of these movements are visually-guided, but some are guided by the sense of touch (i.e. haptically-guided), such as reaching for your keys in a bag, or for an object in a dark room. A marked right-hand preference has been reported during visually-guided grasping, particularly for small objects. However, little is known ...
Dual-task interference is greater in delayed grasping than in visually guided grasping.
Previous kinematic research suggests that visually guided grasping employs an accurate real-time control system in the dorsal stream, whereas delayed grasping relies on less accurate stored information derived by the perceptual system in the ventral stream. We explored these ideas in two experiments combining visually guided and delayed grasping with auditory tasks involving perception-based im...
No evidence for visuomotor priming in a visually guided action task.
Craighero et al. showed that grasping movements were initiated more quickly when the goal object shared the same orientation as a previously seen 'prime' object. Because the goal object was never visible in these experiments, however, it is unclear whether the data should be construed as evidence for a general visuomotor priming effect (as the authors contend), or only as evidence for a more sp...
Planning movements well in advance.
It has been suggested that the metrics of grasping movements directed to visible objects are controlled in real time and are therefore unaffected by previous experience. We tested whether the properties of a visually presented distractor object influence the kinematics of a subsequent grasping movement performed under full vision. After viewing an elliptical distractor object in one of two diff...
Visually and memory-guided grasping: Aperture shaping exhibits a time-dependent scaling to Weber's law
The 'just noticeable difference' (JND) represents the minimum amount by which a stimulus must change to produce a noticeable variation in one's perceptual experience and is related to initial stimulus magnitude (i.e., Weber's law). The goal of the present study was to determine whether aperture shaping for visually derived and memory-guided grasping elicit a temporally dependent or temporally i...
The contributions of vision and haptics to reaching and grasping
This review aims to provide a comprehensive outlook on the sensory (visual and haptic) contributions to reaching and grasping. The focus is on studies in developing children, normal, and neuropsychological populations, and in sensory-deprived individuals. Studies have suggested a right-hand/left-hemisphere specialization for visually guided grasping and a left-hand/right-hemisphere specializati...
Journal:
IEEE Trans. Robotics and Automation
Volume 14, Issue -
Pages -
Publication year: 1998